Tip: Tracking your Flash/Flex site [UPDATED]

There's a lot of fuss surrounding this topic. So why should we bother tracking what users do with our applications and games on the web? Well, there are many reasons, but the main motivation should be to gain knowledge about the user friendliness of your creation. Does the user understand how to use the site/app, or does he give up after a short try? If you're in the advertising business there may also be other motives, but then you'd probably know what you're looking for... so I'm just going to outline how to track your app.
The first thing you need to do is head over to Google Analytics (GA) or a similar service, unless you want to develop your own tracking system, something I will not touch with a ten-foot fork in this article.
GA is pretty straightforward to set up, and once you have tracking working on the HTML page that hosts your swf file, it's time to get into the clever stuff. The thing is, tracking from ActionScript 1/2 and tracking from AS3 are two different things. In AS2 you would simply use getURL("javascript:pageTracker._trackPageview('myTrackingPoint');"). This adds a tracking point called «myTrackingPoint» to GA. In AS3 it looks like this: ExternalInterface.call("pageTracker._trackPageview","myTrackingPoint"). The difference comes down to AS3 having a different and much more powerful interface for talking to JavaScript.

To use this in an effective manner I highly suggest using some kind of folder structure when tagging your apps. If, for example, you have 5 buttons in your main menu, track them like so:
ExternalInterface.call("pageTracker._trackPageview","mainmenu/button_home");
In this case GA will create a folder structure, which makes navigating the GA report much easier.
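Since ExternalInterface ends up invoking JavaScript on the hosting page anyway, one way to keep tracking strings consistent is a small page-side helper. This is just a sketch; the function name trackFlashEvent is mine, and it assumes the standard GA snippet has already created pageTracker:

```javascript
// Hypothetical page-side helper: builds the "section/item" path and
// forwards it to Google Analytics. Kept defensive so a missing GA
// snippet doesn't blow up the SWF's ExternalInterface call.
function trackFlashEvent(section, item) {
  var path = section + "/" + item;          // e.g. "mainmenu/button_home"
  if (typeof pageTracker !== "undefined") {
    pageTracker._trackPageview(path);       // the same call GA's snippet exposes
  }
  return path;                              // handy when debugging from AS3
}
```

From AS3 you would then call ExternalInterface.call("trackFlashEvent", "mainmenu", "button_home") everywhere and get the same folder structure for free.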
Okay... now you're on your own.

User friendliness - a matter of taste?

Sneaking about on the WWW I tend to find all these articles dealing with something the authors call user friendliness in computer programs. Most of the articles deal with the differences between the OS X interface and the Windows interface, but they often tend to generalize, and hence end up talking about user interface design in general. And it's nice that people care about how things look, but that's also the problem: 90% of the articles dealing with user interface design focus on visual design. Now, visual design is only a small part of the user interface. Indulge me while I explain why it's important to know this.

User interface design is not just a question of whether that button should be red or blue. Well, it's that too, but not only. Visual elements in isolation are the domain of graphical design, which is something quite different from user interface design. Since my field of interest lies within the world of RIA applications, I'll use this as my point of origin for this statement.
When you sit down on your trusty behind and start thinking about how your RIA application should behave, you need to take the following elements into consideration: IA/SD, UID/HMI and GD. That's a lot of nice, non-disclosing acronyms, so here's what they mean.

IA - Information architecture. SD - System design (systemization):
In RIA development, these two are actually the same thing. In other areas like website development, IA will follow a slightly different path than SD, but in most cases, and in this case, they are the same thing.
The science of expressing a model or a concept of an entire system is the shortest possible definition of these concepts I could come up with on the fly. This means using a modeling approach to break large amounts of data down into a more readable format, in which logical paths of data can be seen during the development and design of the application. In other words, trying to predict the user's movement through the system. This approach lays the foundation for usability and findability in applications, websites and databases.

UID - User interface design. HMI - Human machine interaction:
Like IA and SD, UID and HMI also go hand in hand; the only difference lies in the more empirical approach of UID, while HMI is much more theoretical. UID is, as the expression denotes, a collection of rules on how to design the user interface of your application: buttons and boxes and stuff. The rules that make up UID are derived from HMI principles, like Fitts' law, reaction time and so forth. Let me stress this point: these rules are measurable data, not a matter of taste. The HMI principles are built upon years of research into human physiology, psychology and human habits. So when the UID principle states that the OS X menubar at the top of the screen is a better way to go than attaching a menubar to each and every window (Windows), we know this because we can actually prove it through an HMI principle. And therefore we simply adhere to the UID/HMI principles when designing an interface.

GD - Graphical design:
This is not a matter of taste either. Surprised? Well, GD is kind of a matter of taste, but there are lots of rules in this area too. There's a reason why you are reading this black text on a white background, you know. Graphical designers know that this works, and they stick to it most of the time. In this area you encounter subjects like readability, fonts and so on. But don't get me wrong, graphical designers don't only deal with colors and fonts; they also need to know stuff from the UID area, how else would they know how to design a button that the user will actually understand how to click? The transition between UID and GD is blurred, and I think it should be. When all these fields come together, the result is an efficient, understandable, beautiful and intuitive user interface.

So here is my point in all of this. You cannot simply look at an application and say that because it's ugly it can't be user friendly... it may very well be extremely user friendly. And then again, a really pretty application with fancy stuff in it may be completely useless. User friendliness is not a matter of taste, it's a matter of skill and knowledge, plus a touch of magic.

No way to dodge it - The Leopard review

So here it is, the world's most advanced OS, Apple's OS X 10.5 Leopard. By now a bunch of reviews have already been written about the nooks and crannies of Leo, so this will not be a feature tour of the entire OS; click the link at the top to experience that. Instead I'll take a brief look at what's important to me in this release, and why I plunged straight into it and upgraded both my Macs to run OS X 10.5.

My main excitement in this release was the new Mail application... yeah, geeky, right? But the ability to take notes, create todos, and have everything beautifully organized and synced with a simple IMAP account was a very appealing idea to me. And boy do I use it. I love the new Mail application, not so much the stationery perhaps, but everything else comes off as much more efficient and polished compared to earlier releases. So I did expect Mail to be great, one might say, but I did not expect Cover Flow to be as handy as it turns out to be. This is the ability to see your documents and files as content previews ordered in a sort of shuffle-stack mode. Incredibly handy when browsing media files! And it's all powered by Quick Look, the new brilliant way of presenting the content of any file, anywhere. You no longer get an icon for a document type, you get an actual preview of what's inside the document instead. And furthermore, a touch of the space bar and voila, full-screen preview of any file without opening a single program.

Any bad stuff then? Of course there is. Apple makes mistakes like any other company developing a piece of software spanning millions of lines of code. However, there are not many mistakes in this release. Some people are arguing that the dock and the menu bar have a weird graphical appearance... well, the thing is, people always fear change, and this is one such case in my opinion. There is nothing wrong with the new desktop, it's just different.

If you need one reason besides those already mentioned to upgrade; Spotlight! It's blazingly fast and you can search any shared computer.

A more in depth review of Leopard can be found at Ars Technica.



Review: Pixelmator

The brand spanking new Pixelmator from the Dailide brothers adds a new member to the Delicious Generation software pool. This is an image editor for OS X, utilizing Apple's Core Image processing routines. It's not a Photoshop replacement, yet. But it has the potential of becoming one.

I'm a vigorous Photoshop user, so naturally I'm accustomed to the tools provided in that app. Coming from that background, Pixelmator was a no-brainer: the menus and tools were almost exactly where I expected them to be. There is one huge difference in the UI though... Pixelmator is far better than Photoshop. Not only does it look prettier, it's also more logical, cleaner and makes for a sleeker experience. You won't get scared when presented with the Pixelmator interface, you'll go "wow, way cool man!", whereas when opening Photoshop for the first time you'll go "ehm, do I really need this?".

However.. this is the first release of Pixelmator, and it is slow.. sometimes too slow, even on my iMac Core2Duo. Photoshop is much faster..and that matters. But as a whole I do believe that Pixelmator will become a Photoshop competitor..in time. Check it out at www.pixelmator.com!

Windows better on Macs? Is Apple getting greedy? And leave Steve Jobs alone!

An intriguing blog post by Steven Frank, co-creator of Panic, discussing whether Windows runs better on a Mac. He also shares some insight into why OS X is in fact a better OS than Windows.

Also, veteran Wil Shipley from Delicious Monster thinks Apple is getting greedy. Is he right? Lots of good stuff in his post.

But of course, our all time favorite Apple fan Justine, just wants us to leave Steve Jobs alone.

Embrace, extend and extinguish

The three E's: Embrace, Extend and Extinguish. Not many people besides those already up to their necks in nerdiness know what this term describes. In short, it is a term used within Microsoft to describe their strategy for hostile takeovers in areas of the computer industry where standards are widely in use... like the Internet. The term was actually first made public by the U.S. Department of Justice.

It works like this: You enter a product category, e.g. the WWW with a browser. Let's call it Internet Explorer. This is the embrace part, when you enter the area, seemingly on fair terms. But then, when your browser has become widespread, you start to add proprietary abilities to it. Let's call that ActiveX, for example. These new features do of course implement functionality not found in competing products, like other browsers, and therefore companies will start to use this closed technology, for example for online movie rental. But wait, isn't adding features to browsers a great thing? Sure, except the technology added is closed, so no other browser can ever support it. And then we enter the final stage. Since the MS browser now has all these features not supported by other browsers, and there is a de facto monopoly in some areas of browser technology, like DRM video, this gives MS the power to extinguish the competition. And this is why IE is the most widespread browser in the world... E.E.E.
In 2004, to prevent a repeat of the "browser wars," and the resulting morass of conflicting standards, Apple (makers of Safari), Mozilla (makers of Firefox), and Opera (makers of the Opera browser) formed the Web Hypertext Application Technology Working Group to create open standards to complement those of the World Wide Web Consortium. Microsoft has so far refused to join, citing the group's patent policy, which requires that all proposed standards be possible to implement on a royalty-free basis.

But why do they do this? It's not like they need the money, and IE is a free browser... so no money to be made there. The thing is, the business strategy utilized by Microsoft is based on market share. Unlike companies like Apple and Sun, which base their business plans on profit, Microsoft moves money around from areas where they make money to areas where they lose it, all to gain market share. The idea is to run competitors into the ditch using size and brute force, effectively creating a closed standard, as in the case of Microsoft Office. This way Microsoft doesn't have to worry about the profitability or quality of their products; people have to buy them, because there is no alternative.

Medal of Homer



"The Simpsons game, coming this fall. To every platform ever made!"

Priceless!

Quicktip: Getting the GET params in ActionScript

So, you want to get the content of a GET variable, say the id in «http://nytimes.com/?id=dork». Using a server-side script like PHP in conjunction with ActionScript, you can read $_GET['id'] and print it into a FlashVars parameter on the page in which the swf file is embedded. That sounds like a tedious solution, and what if you can't control the server-side scripting? Well, good news: using the spanking new ExternalInterface API in ActionScript 3 we can get the id value by calling a bit of JavaScript.

var myStr:String = ExternalInterface.call("window.location.search.substring", 1);

Note that this will actually give you the entire string "id=dork", so you will have to split it on the equals sign.

var myParams:Array = myStr.split("=");
var param:String = myParams[1];

There you have it... no server-side scripting, only ActionScript and JavaScript, all from within the one swf file.
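If the URL holds more than one parameter ("?id=dork&lang=en"), a single split on "=" falls apart. Here's the parsing logic sketched in plain JavaScript, since that's where the string comes from anyway (the function name is mine); it ports straight over to AS3's String and Array methods:

```javascript
// Parse a whole query string ("?a=1&b=2") into a lookup object,
// instead of assuming there is exactly one parameter.
function parseQuery(search) {
  var params = {};
  var pairs = search.replace(/^\?/, "").split("&");
  for (var i = 0; i < pairs.length; i++) {
    if (!pairs[i]) continue;                       // skip empty segments
    var idx = pairs[i].indexOf("=");
    var key = idx === -1 ? pairs[i] : pairs[i].substring(0, idx);
    var val = idx === -1 ? "" : pairs[i].substring(idx + 1);
    params[decodeURIComponent(key)] = decodeURIComponent(val);
  }
  return params;
}
```

parseQuery("?id=dork&lang=en") then gives you { id: "dork", lang: "en" }, no matter how many parameters tag along.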

Fitts' law and RIA

To my amazement, it seems like the usage of Fitts' law in Internet applications and websites is a rather controversial topic these days. There are critics out there claiming that Fitts' law is an old and outdated approach to designing user interface elements.
Well... the law of gravity is also getting old, but guess what, it's still relevant. I'm not insinuating that Fitts' law and the law of gravity are the same, but the point is that Fitts' law is just that: a law describing physical movement. This is not an idea made up by a besserwisser trying to revolutionize the UI of computers. I believe the biggest problem with Fitts' law is that most people working in UI design, and particularly in RIA and web design, have little or no knowledge of its actual content.

So what is it? In short, Fitts' law is a model describing human movement, formulated by Paul Fitts back in 1954. Contrary to popular belief, Fitts' law isn't all about making things bigger on screen. While that is part of it, the actual formula describes the relationship between the distance from the starting point to the target and the size of the target. And then there is something called infinite width, which I will describe in more detail later.

Mathematically speaking, the law states that «Time = a + b log2(D/S + 1)», where D is the distance from where your cursor currently is to the target, and S is the width of the target. The a and b are empirically measured constants. Not that hard, is it? Yet it's extremely powerful when put to use. So how do we use it? Let's start off with the words of Bruce Tognazzini.

While at first glance, this law might seem patently obvious, it is one of the most ignored principles in design. Fitts' law (properly, but rarely, spelled "Fitts' Law") dictates the Macintosh pull-down menu acquisition should be approximately five times faster than Windows menu acquisition, and this is proven out.

What TOG is talking about here is the infinite width. What that means is that, like in the Macintosh menubar, the menu is at the top of the screen, always. This will let the user move his cursor at full speed to the top of the screen, because no matter if the user keeps pushing the mouse upwards, the cursor will stay in the same place, hence making a target of infinite width.
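The formula itself is easy to play with. Here's a quick sketch in JavaScript; the constants a and b are placeholders, since the real values have to be measured for a given device and user:

```javascript
// Fitts' law: T = a + b * log2(D/S + 1)
// D = distance to the target, S = target width,
// a and b = empirically measured constants (placeholders below).
function fittsTime(a, b, D, S) {
  return a + b * (Math.log(D / S + 1) / Math.LN2); // log2 via natural log
}

// Same distance, twice the target width -> shorter acquisition time:
var smallTarget = fittsTime(0.1, 0.1, 400, 20);
var largeTarget = fittsTime(0.1, 0.1, 400, 40);
```

Doubling S (or halving D) lowers the time, which is the whole argument for big, close targets, and for edge targets with their effectively infinite width.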

But let's drop the OS X vs Windows crap. Let's instead concentrate on how to apply the principles of Fitts' law to our RIA apps and websites. Note that these are only my suggestions on how to apply Fitts' law, so please do your own research. My opinions are largely based on my university studies of HCI and later experience building applications, RIA apps and websites.

1. Contextual menus. If you are going to build a contextual menu, make it a pie menu. I don't really like contextual menus, because they hide the interface from the user, but it seems they're here to stay, so let's make the best of it. A pie menu will be faster than a drop-down menu, because you minimize the distance the cursor has to travel by centering the menu under the cursor at creation time. Additionally, pie pieces make for bigger targets than lines of text in a drop-down menu.

2. Buttons. Group your buttons together in the order the user is most likely to use them. Commonly used buttons should be bigger and more accessible than less important and less used functions.

3. Dialogs. Attach your dialog windows to the window they belong to. This way, the user will know where the dialog will appear, and the movement can therefore be "trained". Fitts' law doesn't actually describe trained movements at all, but training matters in conjunction with an easy-to-hit target, which explains its relevance in this context.

4. Edges. If you are in control of the mouse pointer, e.g. in a full-screen app, then make use of the edges to place important UI elements. Edges are much easier to navigate to because of the infinite width. But don't place related elements on opposite edges of the screen, forcing the user to move the cursor across the entire width... that would be bad. For both right- and left-handed people, the upper left corner of the screen is the spot most easily navigated to.

5. Drag and drop. Since muscle tension increases during drag-and-drop operations (holding the mouse button pressed), it's a good idea to minimize the distance the user has to drag something before dropping it. You could, for example, pop up a drop box next to the item when the drag operation starts, minimizing the distance.
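To make point 1 concrete, the same formula can compare a pie menu against a drop-down. The geometry below is assumed, not measured: slices 60 px from the cursor and roughly 50 px wide, drop-down rows 20 px tall, with placeholder constants:

```javascript
// Rough Fitts' law comparison, T = a + b * log2(D/S + 1),
// with placeholder constants a = 0.1 and b = 0.1.
function acquisitionTime(distance, size) {
  return 0.1 + 0.1 * (Math.log(distance / size + 1) / Math.LN2);
}

var pieSlice = acquisitionTime(60, 50);          // same cost for every slice
var dropdownRow5 = acquisitionTime(5 * 20, 20);  // fifth row, 100 px away
```

Every slice costs the same, while drop-down rows get progressively slower the further down the list they sit.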

In my opinion, Fitts' law is an easy-to-apply principle. It takes some thought and consideration, but it's worth it.

Follow up: Flash and H.264[updated]

As a follow-up to my recent article with my guess that YouTube was moving over to the H.264 codec, leaving the Adobe flv format behind: well, it looks like it's true, kinda... only not. Today Adobe announced, in a press release, support for the H.264 codec in Flash Player version 9.x. Adobe also adds support for HE-AAC, which is part of the MPEG-4 standard like H.264. The press release does not say in specific words which container format will be supported. Will Flash Player add support for QuickTime directly, or, more likely, will the H.264 codec be wrapped in the flv format? [Update: After reviewing and testing the new Flash Player it is now clear that it does indeed support all the H.264 containers, m4v, mov and so on. There is no new API to be used; you can use it directly through the NetStream object like you would with any flv file.]

This is really great news for Flash developers, as this release later this fall will make users adopt the updated Flash Player at a faster rate, due to the almost inevitable adoption of this codec by YouTube and other online video providers. The added support for H.264 also means HD video can be utilized from within Flash-based apps and presentations.

A beta will be released today at labs.adobe.com

Quicktip: Flipping an image in Actionscript

Ever needed to flip a dynamically loaded image in Flash or Flex? Like for making mirror effects and so on.
There is no "flipImage" property in ActionScript 3, but there is one easy way to do it.

In AS3, to flip an image horizontally, go like so:

var flipper:MovieClip = new MovieClip();
//Load your image into the flipper MC
flipper.scaleX = -1;
//The clip now draws to the left of its registration point, so you may
//also need something like: flipper.x += flipper.width;

This works because you're actually scaling the image past zero, literally turning it inside out, mirroring it around its registration point.
PS: This will not work with components, like the UILoader.
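What scaleX = -1 does under the hood can be illustrated on raw pixels: flipping horizontally is just reversing every row. A sketch in plain JavaScript over a flat, row-major pixel array (the function name is mine):

```javascript
// Flip an image horizontally by reversing each row of pixels.
// `pixels` is a flat row-major array, one value per pixel.
function flipHorizontal(pixels, width) {
  var out = [];
  var rows = pixels.length / width;
  for (var row = 0; row < rows; row++) {
    var start = row * width;
    for (var col = width - 1; col >= 0; col--) {
      out.push(pixels[start + col]);   // read each row right-to-left
    }
  }
  return out;
}
```

flipHorizontal([1, 2, 3, 4, 5, 6], 3) returns [3, 2, 1, 6, 5, 4], each row reversed; the scaleX trick gets the GPU-ish transform to do the same job for free.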

Flash vs Flex, what to choose when

If you're an RIA developer like me you've probably encountered this situation at least once. You have this somewhat large project to do, there are lots of interactive elements to it, and the target format is swf. If you don't already know this, both Adobe Flash and Adobe Flex deliver in the swf format, same player and all. The only real difference lies in the approach you take to developing the finished product. My approach would be to first define what kind of project this will be. Will it end up as (1) a presentation, driven by animations and not much dependent on user interaction to work? Or will it become (2) a true RIA application, containing lots of interactivity and projecting a user-driven interface?

In the first case, your choice should be Flash, as this tool has a timeline and is in fact intended for online presentations with some interactivity. Flash is great for presenting video, small games and other projects that depend more on the designer than on the developer.

In the second case you could in fact go for Flash as well, which would be slow and horrible, but it would do the job and the end product would most likely end up looking almost identical. However, when doing a developer-driven project, the tool used should be a developer tool. Enter Adobe Flex. The UI of Flex is dramatically different from Flash, focusing on code to a much larger extent and totally abandoning the timeline. Because Flex is built like any other developer tool, like Xcode and Visual Studio, most developers will feel much more familiar with the UI of Flex. Another thing to consider is development time. Since Flex features layout elements made especially for building applications, it's much easier to set up a UI that will adapt to fit the needs of the user. And Flex has even more tricks up its sleeve: it can handle swfs exported from Flash. So really, you can have the designer make custom elements all the way through and set them up visually in Flex. However, most designers will probably end up using Flex's powerful CSS support instead, as it's quicker and makes for a more consistent user interface.

The difficult thing here isn't really to convince a designer to work in Flex, or a developer to work in Flash.. the hard thing is to define which is the right tool for the project you're currently working on. Well, good luck with that then ;-)

YouTube's secret move

YouTube is a great service that allows people to both upload and watch videos. The thing that makes YouTube stand out from similar services is its substantial implementation of so-called social tools, like commenting and rating. However, even though YouTube is an immensely popular service, it has one problem: video quality. The videos seen on YouTube are often in bad shape even before they are transcoded into the Flash video format (flv). To encode video into flv, one most commonly uses the On2 VP6 codec, as does YouTube. Even though this codec is substantially better than the Sorenson Spark codec it replaces, VP6 does produce grainy, bad picture quality at low bitrates.

Now something is stirring within YouTube as a response to this quality issue. If you've been following the dealings of Apple Inc. lately, you'll have noticed that they have released a score of products closely integrated with YouTube. Both Apple TV and the oh-so-fantastic iPhone have YouTube functionality, but neither of them can play back the Flash format. Lately, the newly released iLife '08 (iMovie) also includes the option of publishing your movie directly to YouTube. No doubt this functionality is a result of Apple and Google being in cahoots (Google owns YouTube), but the interesting part is that all the Apple products which can upload to and download from YouTube utilize the H.264 codec from the MPEG group.

Since H.264 is far superior to the On2 VP6 codec, and H.264 is open to everybody, there is now some speculation that YouTube is slowly converting its material to H.264 instead of flv. This would mean a significant gain in quality for YouTube material, and could be a necessity if YouTube is to target home entertainment systems with large monitors.

I think this would be a smart move by YouTube, because let's face it, Flash video isn't exactly impressive in its quality-versus-size ratio. But, as YouTube and services like it are the driving force behind getting people to upgrade their Flash plugins, this may mean that Flash designers and developers will have to dig into QuickTime development, as QuickTime is the only real alternative for playing H.264 video online today. Well... let's get to it then.

Making all others look like amateurs

So, once again Apple rocked the community with a full score of new apps, ranging from the most excellent iLife '08 to the gigantic and extremely well-written Web 2.0 app called .Mac Web Gallery. Everything appears to have been in development for several years, yet they did it in one year. And in a couple of months Leopard will arrive as well. Compare that to Microsoft's five-year Vista "experiment". So how the heck are they able to do this time and time again, staying ahead?
The answer is so simple..

If you have read or watched Apple's "Think Different" commercial you already know why Apple stands out to this extent. The people who actually build their software, engineers like Scott Forstall, designers like Mike Matas, aren't exactly what you would call average software developers. The foremost profiles, the decision-makers within Apple, are front-runners for open source development; they often come from small, independent, ideological companies like Delicious Monster and OmniGroup. Not what you would expect from a company making its living from software and hardware. Other similar companies keep their cards close, for good reason: if you make an invention you can't simply make it available to all, or it can be copied and suddenly you're out of customers, right? Well, Apple has realized that if you find the best people in the world, and somehow manage to combine their talents, others will find it hard to follow, no matter if they're holding the source code.

"..no respect for the status quo" it says at line 8 of "Think Different". And this is exactly why Apple has such devoted fans: because they really change things. They don't make something that people buy and then sit back and watch the money flooding in; they keep on working, making things better.

It's very exciting to see Apple moving into the web development community in full force now. Well, really, they were among the first to make use of the old XMLHttpRequest extension, now cleverly named AJAX (see Web Gallery).
.Mac Web Gallery is really a desktop app on a website. It mirrors iPhoto perfectly, and it can do everything Flickr can, and then some. The focus is on great design and great functionality, making things easy to use, removing the typical "website look" and replacing it with a familiar and logical design derived from the desktop apps. And of course, all the metaphors that exist in the desktop app, browsing, mosaic and the new "skimming", are reflected in the web app. This makes for a familiar and consistent user interface. Because that's what we really want: something we can just use, not learn to use.

The future of the Internet

This post is merely to underline my previous claim that desktop applications are really the future of the Internet. As you know, if you read my last posting, I tend to think that the cloud will work more or less like a giant back-end system in the future. That is, you get the experience of a desktop app and the content of the Internet, all at once. And yes, here are two dudes who think so too ;-)

RIA - Rich Internet Application

According to Wikipedia: "Rich Internet Applications are Web applications that look and behave like desktop applications." I totally agree, and therein lies the conundrum. Web designers and web developers have their way of doing things, and so have application designers and developers. Even though they are all in the same business, the application approach has a somewhat longer history to rely upon. The web development approach really didn't come into play until the mid-1990s; by then application developers had already clocked in about 10 years of experience creating software and UI design. But the much-anticipated clash between the different groups of developers and designers never came, for the simple reason that they were doing different things in different places. That is, until now. These days it's no longer enough to master "tagging" and Photoshop to satisfy clients wanting to blow away competitors online. And with the introduction of tools such as Adobe Flex and Microsoft Blend, the WWW is rapidly heading towards application design and development. I haven't forgotten about Google and their bet on JavaScript/AJAX development; however, that's somewhat outside the scope of this post, as it largely remains web development.

Developing an application for the web and developing for an OS are two very different worlds. There is not only the technical aspect of filesystems, hardware and low-level programming versus the high-level web development strategy; there are also the heavily fortified rules of user interface design that exist in application development. On the other hand, app developers escape the headache of making sure their app runs alike in all browsers.
My take on this is to take the best of both sides. UI design guidelines have come a long way on the client side of things. The Apple design manual can by itself carry the entire weight of the user experience. However, the freedom expressed by some web designers, the ability to throw out old concepts and bring in new ones, is something we should have in the mix, as long as these new ideas conform to established truths about what people actually do in front of a screen. Regarding development, I firmly believe that pure web developers have a lot to learn from application developers in the fields of efficiency and security.

The problem that still remains is the speed of evolution in RIA design. Most people do not have enough computing power to run high-end RIA applications, the reason being the inefficient browser and plug-in architecture. In a recent interview with Bill Gates and Steve Jobs (yes, together), this question was brought up. Those two disagree on many things, but one thing stood out that they both truly believed in. They argued that in the future the Internet will function more like a giant database, a back-end system, while the front end will be handled by client apps running natively on the user's system, effectively removing the problem of bad user experience. They make a good point. The thing is, you cannot leap into the future online, because you can't be sure that people will be able to follow you. And it looks like many development companies have come to the same realization; just take a look at AIR (Apollo). It's Adobe's way of moving application development away from the web and onto the desktop, while keeping the multi-platform Flash development environment.

For the future I predict that the web will keep its role as a place to get information, to advertise, and a place to play. But for bigger, more complex things I believe that people, given time, will notice the better user experience and the sheer speed a desktop app can provide. A good example is Apple's Google Maps client on the iPhone: a client running on a phone that actually outperforms the original Google Maps client by a factor of ten. As people become more tech-capable, they'll demand more, and then we who are designers and developers should hear their call... or crash and burn.