We’ve used a couple of different usability tools at various stages of the library website project. We’re fortunate in having access to a high level of expertise, advice and guidance from colleagues at the University’s Institute of Educational Technology. This means we have access to some advanced usability tools, such as eye-tracking software.
We’ve used two different tools. The first is Morae usability software, which we have on a laptop and use to track and record mouse movements and audio commentaries. It’s quite portable and allows us to do some small-scale testing; we most recently used it for some of the search evaluation work. Its limitation is that although it tracks what people do with the mouse, that may be very different from where they are looking on screen.
At a workshop I was at the other day, people talked about users scanning web pages in an ‘F’ pattern: two horizontal sweeps across the page followed by one vertical scan down the left-hand side. This implies that they will pick up items in the left-hand column and across the top quite easily. This was something reported by Jakob Nielsen here back in 2006, with some sample heatmap screenshots shown below.
For the latest testing, we’ve been able to use the Tobii eye-tracking system in the IET usability labs, which, as the name suggests, tracks the user’s eye movements around the screen and gives a much richer indication of how they are interacting with the website. You can show where users are looking as a heatmap, as in Jakob Nielsen’s example above, or alternatively as gaze opacity. This shows where users are looking in white: the more opaque the white, the more time their gaze is concentrated on that location, while places that aren’t being viewed show up as black.
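To make the gaze-opacity idea concrete, here’s a minimal sketch in Python of how fixation times might be turned into an opacity map. The fixation data and grid layout are made up for illustration — this isn’t Tobii’s actual export format — but the principle is the one described above: cells nobody looked at stay at 0 (black), and the longest-viewed cell becomes fully opaque white (255).

```python
# Sketch: turn gaze fixations into a gaze-opacity map.
# Fixations are (x, y, duration_ms) tuples on a coarse grid --
# hypothetical data, not Tobii's real export format.

def gaze_opacity(fixations, width, height):
    """Accumulate fixation time per cell, then scale to 0-255 opacity."""
    grid = [[0] * width for _ in range(height)]
    for x, y, duration in fixations:
        grid[y][x] += duration
    longest = max(max(row) for row in grid)
    if longest == 0:
        return grid  # nothing viewed: all black
    # More time looking -> more opaque white; unviewed cells stay black (0).
    return [[round(255 * cell / longest) for cell in row] for row in grid]

# Two dwells on one spot, one shorter glance elsewhere.
fixations = [(1, 0, 300), (1, 0, 300), (2, 1, 150)]
opacity = gaze_opacity(fixations, width=4, height=2)
```

Real eye-tracking software smooths the fixations into a continuous image, but the accumulate-and-normalise step is the heart of it.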
The example shown is from the latest library website testing, and you can quite clearly see the same sort of ‘F’-shaped scanning behaviour on one of the sub-pages of the website. Looking through some of the other pages, though, it isn’t always quite as clear-cut.
Keren, my colleague on the project team who has been running the usability testing stage, is currently going through the images and the transcripts/notes and will pull out some recommendations to modify the website to address any areas that users found difficult to use. These recommendations then get reviewed by our Website Editorial Group and prioritised: either they are high priority and need to be fixed before the site can go live, or they are of lesser priority and can be resolved over a longer time period.
The value of this sort of testing is quite high, as it isn’t really until users actually engage with your website and try to use it in practice that you find out how well it works. It is time-consuming: there’s organising to do to find people to take part in the testing (in our case a research panel to go through to get approval, and then emails out to students), time to write and fine-tune the scripts that will be used for the testing, and then time to carry out the testing and analyse the outputs. But that time is well spent if you want to understand how easy users will find your site to use.
In the last few weeks I’ve taken part in a couple of activities that involved the use of ‘personas’. If you’ve not come across personas before, each is a made-up person, with a name and a personal history, who represents one of your key client groups. [There’s some good information about personas here on the usability.gov website]. Personas are a really useful service design and usability tool, as they allow you to visualise your service through the eyes of one of your users.
Typically your persona would have a name, a photograph (because it makes them easier to relate to), and a back-story: educational background, employment status, personal circumstances, aspirations and motivations, for example. It’s also good to include things such as whether they use social networking and what sorts of things they are interested in. Generally you’d also want to categorise them with a short phrase that makes it easier to discuss them. Often you’d create a set of sheets or cards with the details of each persona.
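To show what the details on one of those cards might look like, here’s a small Python sketch of a persona record. The field names and the example persona are my own invention, not the University’s actual template, but they cover the elements listed above: name, photo, back-story, social networking use, and the short categorising phrase.

```python
# Sketch of a persona card as a data structure.
# Field names and the example persona are illustrative only,
# not any official persona template.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    tagline: str            # the short categorising phrase
    photo: str              # path to a photo, so they're easier to relate to
    education: str
    employment: str
    circumstances: str
    aspirations: str
    uses_social_networks: bool = False
    interests: list = field(default_factory=list)

    def card_heading(self):
        """One-line heading for printing on a persona sheet or card."""
        return f"{self.name} -- {self.tagline}"

# A hypothetical persona for discussion.
alex = Persona(
    name="Alex",
    tagline="time-poor distance learner",
    photo="alex.jpg",
    education="A-levels, first university module",
    employment="full-time retail manager",
    circumstances="studies evenings and weekends",
    aspirations="career change into IT",
    uses_social_networks=True,
    interests=["podcasts", "five-a-side football"],
)
```

The point of the structure is only that everyone on the team discusses the same, consistent person — “would Alex find this page useful?” — rather than a vague notion of “the user”.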
In the two exercises I’ve recently taken part in, personas were used at two very different stages of the website design process. The first was a demonstration of their value in website design and usability, looking quite specifically at a website to see which elements of a particular page were going to be of use to different personas (and which elements were not going to be relevant). To use the personas you have to put yourself in their place and look at your website through their eyes: what are they looking for, and what is their level of experience or knowledge? It throws up some really useful insights into how your website is viewed by users.
In the second exercise, the personas were used at a much earlier stage of the design process, to help look at priorities for the future direction of the Virtual Learning Environment by thinking about what the attitudes of different personas would be to a set of statements about developments. That was quite a useful exercise, as it allows you to think about how your users and potential users will react to things you might develop. Hopefully it would stop you contemplating services or features that users wouldn’t want.
Personas have been used for a while to look at both websites and the VLE at the University. To an extent they have been created around market segments and with students/potential students as the primary focus, but there are plans to develop others and to make personas much more widely used as a design tool. So although the ones that exist are of use when planning and developing a library website, there are a few missing for our purposes as we’ve also got staff and researchers to consider.
Although it’s now a bit late in our website design process to use personas for the design stages, I’m certainly thinking about using them as a tool to check and review the site, and will see whether we can use them much more in future. A useful tool for website design.
We’re in the middle of a set of usability tests as part of our work on the new library website, and my colleague who is running this work suddenly came out with the comment ‘the user is not broken’. It wasn’t a phrase I’d come across, but it seemed to perfectly sum up the right attitude to why we do usability testing.
The user is not broken.
Your system is broken until proven otherwise.
That vendor who just sold you the million-dollar system because “librarians need to help people” doesn’t have a clue what he’s talking about, and his system is broken, too.
It seems to me to be exactly the right attitude to bear in mind when you are testing your website. You have to build your website so it can be used by your users, and you shouldn’t have to provide them with training to use it. So if usability testing identifies features that users cannot easily use, then those features are broken. And that is a tough thing to accept. The standard library approach (and I’m not sure if it is peculiar to libraries and librarians) is to provide helpsheets, guidance, training sessions and signs to help users, i.e. to try to ‘fix the user’ as if they were broken. But look at the commercial web world (Apple with their iOS 5 upgrade, for example): they launch their website or software and provide some information about the features, but don’t ever really offer lots of training on how to use it. Maybe that is a product of extensive testing and confidence in their product, but I’m not so sure that’s it.
In part, at least, I think there is a matter of scale at play. If you run a physical library and your users visit your library building, then you do have day-to-day contact with your users; but even so, you don’t talk to every single person who comes into your library, or provide them with individual guidance. They might see a helpsheet or leaflet, but they are more likely to use your services by trial and error or by following what other people do. With a virtual library you actually talk to a tiny fraction of your user community and can only realistically provide training to a handful of users. So your systems, websites and so on have to work, with a minimum of on-screen guidance. Have you ever seen a user guide to a cash machine? No? Thought not.
Having read about some of the problems that people have been having with their upgrades to iOS 5, it was with a bit of trepidation that I upgraded the iPad yesterday, particularly as I’d had a few problems when upgrading to iOS 4.3.5. I fully expected it to take several hours, with all manner of problems. But actually it was pretty painless given the size of the upgrade.
After plugging the iPad into the laptop and running up iTunes, it needed an upgrade of iTunes on the laptop to the latest version first, so that took a little bit of time. I hadn’t synced the iPad to the laptop recently, so it had to do that too, but once I got into the upgrade process the backup seemed to take more time than the actual upgrade. The whole thing seemed to take no more than an hour. It may have been less, but I’d got so used to the iPad restarting, going blank and doing its own thing that I didn’t realise you had to finish the process off on the iPad itself. The iPad went blank and nothing appeared on the laptop, so it took me a while to work out that I needed to press the button on the iPad and then work through several screens to finish the setup. But on the whole the process ran pretty smoothly for me.
The only problem I had in common with many people was the iCloud setup, but afterwards I was able to go back in and configure it, although I’ve not yet had time to use it to back up the device. There’s not much on the iPad, so it’s within the free 5GB limit, but I think I might wait to think about whether I want everything copied there.
But overall the upgrade went really smoothly, and pretty much all my downloaded apps worked fine afterwards. This time I did the update on a faster network and from the laptop I’d originally set up the iPad from, so maybe that was the difference.
Having completed the upgrade, there are quite a few changes that are evident on the iPad. Most obvious amongst them is the appearance of several new icons on the home page: Messages, Reminders and Newsstand, while the iPod icon has changed to a Music one. [If you want to look through the iOS 5 features then they are detailed on the Apple site here]
Behind the scenes there are quite a few changes that I’ve noticed already. The Music section now defaults to displaying your music as albums. I wondered idly whether that said something about the age of whoever decided the album view would be the default: if you’re used to buying music track by track, then the album view probably isn’t the way you think of music. It sounds a bit old-school, physical-format-centric maybe?
Safari now has tabs, which is great (and not before time, to be honest), but there still seems to be a limit of nine tabs: even though you still get the + sign, it doesn’t seem to open any more tabs (for me anyway). You can also tweet links from the browser as well as email them.
The Messages feature I find a bit strange. It lets you message iPhones, iPod touches and iPads, but nothing else, presumably because it links in with the Apple account. At the same time as iOS 5 integrates Twitter support more completely (although the Twitter app still doesn’t take advantage of the larger screen of the iPad), it isn’t all that obvious why Apple would introduce a new messaging feature into what is already a pretty crowded marketplace (with SMS, Twitter and Google+ providing multi-platform support). Making it available through iTunes might have been a good idea, maybe? I noticed also that even if you turn it off from Settings, the icon still remains on the home page.
I gather there are 200 new features in iOS 5, so it’s going to take a while to find them all. Amongst the General > Network settings is a VPN feature that I don’t remember seeing before, and that looks worth checking out to see whether it can connect to our VPN. It suggests a bit of thinking about the iPad as a device for business and what would make it easier to use for business. Easier network browsing as an integral part of the iPad would be good to see in the longer term, along with some multi-user capability.
Overall, I’m pretty happy that the upgrade went OK, and there’s a host of new features to play with. As a final thought: that’s a new operating system version as a free upgrade, which is pretty good.
I blogged nearly a month ago some reflections on our latest funding bid https://libwebrarian.wordpress.com/2011/09/15/reflections-on-our-latest-funding-bid/ and, sitting at the FOTE conference yesterday, an email popped up with the outcome of the bid. [I’m not sure why, but I’m still not really used to the pervasive nature of modern email access. Although there has been remote access to systems for a long time, through dial-in, VPN and suchlike, there has always been a bit of a process involved in logging into the VPN or a website and then opening up an email client — or at any rate enough of a process to be able to put off doing it. But with tablets and smartphones, email just appears, along with tweets and other messages. It all seems a bit easy now. I guess I’m still not used to the ease of working remotely, something you now take for granted. But when you think about it, it’s actually something pretty remarkable.]
Anyway, I was a bit surprised to hear about the outcome of the bid so quickly, but really pleased to hear that we were successful. So, something else new to do, which is really exciting for us, and I’m sure I’ll blog a bit about it in the next few months as we get going on project MACON.
I was at the Future of Technology in Education conference on Friday. Run by ULCC at Senate House in London, it was well attended with over 300 people there. It was the first time I’d been to FOTE. It had been recommended as a really good conference so I was interested to see what it was like (and it was a good chance to get out of the office and stop thinking about the new library website for a day).
Reflections on the day
It was a good conference and I’d hope to go again. I’ll probably blog later about the content of the conference itself, but first there are a few thoughts about the way FOTE was run that I found really interesting. Firstly, it was pretty much paper-free, with the exception of the name badge, which actually unfolded to reveal the conference agenda and details of the conference hashtags. That was a really neat approach: no A4 printed agendas or bulky bits of paperwork to carry around with you. The only other bit of paper was a playing card for their ice-breaker game.
What was really novel was the creation of a set of FOTE mobile and web apps, for iPhone, iPad, Android and the web.
These had the delegate lists, the agenda, details of the speakers, the conference location and the sponsors, as well as several feeds for the FOTE blog, comments and Twitter. They even included the delegate survey for the conference and the voting for one of the sessions. I’m really very impressed with the thought that went into this approach. It’s the first technology conference I’ve been to that seemed to have got to grips with the fact that, as the audience was going to come armed with an array of iPads, laptops and smartphones, giving them bits of paper to carry around wasn’t the right message. I wonder how the cost of creating the apps compares with the cost of printing the various bits of paperwork or conference packs that people traditionally give out, but it was a really impressive thing to do and it would be good to see it taken up by other tech conferences.
My second reflection was that when you got to the conference venue, the wifi access code and links to the various conference apps were up on posters, displayed with QR codes, making it really easy to connect from a smartphone. That was a good touch. Conference wifi was pretty good and reliable considering the number of devices in the room; I suspect there were probably more wifi-connected devices than delegates.
My final thought was about the twitterwall used for part of the conference sessions. The transition from one tweet to another was eye-catching: the previous tweet would clear, often with its letters falling down the screen, leaving just the letters it had in common with the next tweet, which would then appear. It was a good visual effect, although possibly a bit distracting from what was going on in that session.
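Out of curiosity, the logic behind that transition can be sketched quite simply. This is my own reconstruction of how such an effect might decide which letters survive, not the actual twitterwall’s code: each letter of the outgoing tweet stays on screen only if the incoming tweet still needs a copy of it, which is a multiset intersection of the two tweets’ characters.

```python
# Sketch: which letters of the outgoing tweet can stay on screen
# because the incoming tweet reuses them. This is a guess at the
# effect's logic, not the real twitterwall implementation.
from collections import Counter

def surviving_letters(old_tweet, new_tweet):
    """Multiset of characters common to both tweets -- these 'stay';
    the rest of the old tweet's letters 'fall' off the screen."""
    return Counter(old_tweet.lower()) & Counter(new_tweet.lower())

kept = surviving_letters("great talk", "keynote")
```

Here one each of ‘k’, ‘e’ and ‘t’ would stay put while the rest of “great talk” falls away before “keynote” assembles itself.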
I do find it fascinating the way that different universities approach the wifi access issue for conference delegates. ULCC had a separate conference wifi SSID and what seemed to be a daily access code. But there seem to be a few different approaches. Maybe something to blog about another time.