We’ve used a couple of different usability tools at various stages of the library website project. We’re fortunate in having access to a high level of expertise, advice and guidance from colleagues at the University’s Institute of Educational Technology. This means we have access to some advanced usability tools, such as eye-tracking software.

We’ve used two different tools. The first is Morae usability software, which we have on a laptop and use to track and record mouse movements and audio commentaries. This is quite portable and allows us to do some small-scale testing; we most recently used it for some of the search evaluation work. Its limitation is that although it tracks what people do with the mouse, where the mouse goes may be very different from where they are actually looking on screen.

At a workshop I was at the other day, people talked about users scanning web pages in an ‘F’ pattern: two horizontal sweeps across the page followed by one vertical sweep down the left-hand side. This implies that they will pick up items in the left-hand column and across the top quite easily. Jakob Nielsen reported this back in 2006, with some sample heatmap screenshots.

For the latest testing, we’ve been able to use the Tobii eye-tracking system in the IET usability labs, which, as the name suggests, tracks the user’s eye movements around the screen and gives a much richer indication of how they are interacting with the website. You can show where users are looking as a heatmap, like Jakob Nielsen’s examples, or alternatively as gaze opacity. This shows where users are looking in white: the more opaque the white, the more time their gaze is concentrated on that location, so places that aren’t being viewed show up as black.

[Website gaze opacity image]
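Conceptually, a gaze-opacity map is just fixation durations accumulated onto a pixel grid and normalised, so the most-viewed spot becomes fully white and unviewed areas stay black. Here is a minimal sketch of that idea, using hypothetical fixation data and grid sizes (the Tobii software produces these maps for you; this is only an illustration of the principle):

```python
def gaze_opacity_map(fixations, width, height, radius=2):
    """Return a 2D grid of values from 0.0 (black, unviewed) to 1.0 (white, most viewed).

    fixations: list of (x, y, duration_ms) tuples -- hypothetical example data.
    """
    grid = [[0.0] * width for _ in range(height)]
    for x, y, duration in fixations:
        # Spread each fixation's duration over a small neighbourhood,
        # with a simple linear falloff away from the fixation point.
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                px, py = x + dx, y + dy
                if 0 <= px < width and 0 <= py < height:
                    falloff = max(0, radius + 1 - abs(dx) - abs(dy))
                    grid[py][px] += duration * falloff
    # Normalise so the most-viewed cell is 1.0 (fully opaque white).
    peak = max(max(row) for row in grid) or 1.0
    return [[cell / peak for cell in row] for row in grid]

# Hypothetical fixations clustered towards the top-left, as in an 'F' pattern:
fixations = [(1, 1, 400), (2, 1, 300), (1, 3, 250)]
opacity = gaze_opacity_map(fixations, width=10, height=6)
```

In the resulting grid, cells near the top-left cluster approach 1.0 while cells far from any fixation remain 0.0, which is exactly the white-on-black effect described above.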

The example shown is from the latest library website testing, and you can quite clearly see the same sort of ‘F’-shaped scanning behaviour on one of the sub-pages of the website. Looking through some of the other pages, though, it isn’t always quite as clear-cut.

Keren, my colleague on the project team who has been running the usability testing stage, is currently going through the images and the transcripts/notes and will pull out some recommendations for modifying the website to address any areas that users found difficult to use. These recommendations then get reviewed by our Website Editorial Group and prioritised: high-priority issues need to be fixed before the site can go live, while lower-priority ones can be resolved over a longer period.

The value of this sort of testing is quite high, as it isn’t really until users actually engage with your website and try to use it in practice that you find out how well it works. It is time-consuming: there’s some organising to do to find people to take part (in our case, a research panel to go through for approval, then emails out to students). It also takes time to write and fine-tune the scripts used for the testing, to carry out the testing itself, and then to analyse the outputs. But that time is well spent if you want to understand how easy users will find your site to use.